

An Anatomy Aware Hybrid Deep Learning Framework for Lung Cancer Tumor Stage Classification

Chowdhury, Saniah Kayenat, Sarmun, Rusab, Chowdhury, Muhammad E. H., Zoghoul, Sohaib Bassam, Al-Hashimi, Israa, Mushtak, Adam, Khandakar, Amith

arXiv.org Artificial Intelligence

Accurate lung cancer tumor staging is crucial for prognosis and treatment planning. However, it remains challenging for end-to-end deep learning approaches, as such approaches often overlook the spatial and anatomical information that is central to the tumor-node-metastasis (TNM) system. The tumor stage depends on multiple quantitative criteria, including the tumor size and its proximity to the nearest anatomical structures, and small variations can alter the staging outcome. We propose a medically grounded hybrid pipeline that performs staging by explicitly measuring the tumor's size and distance properties rather than treating staging as a pure image classification task. Our method employs specialized encoder-decoder networks to precisely segment the lung and adjacent anatomy, including the lobes, tumor, mediastinum, and diaphragm. Subsequently, we extract the necessary tumor properties, i.e., we measure the largest tumor dimension and compute the distance between the tumor and neighboring anatomical structures through quantitative analysis of the segmentation masks. Finally, we apply rule-based tumor staging aligned with medical guidelines. This novel framework has been evaluated on the Lung-PET-CT-Dx dataset, demonstrating superior performance compared to traditional deep learning models and achieving an overall classification accuracy of 91.36%. We report per-stage F1-scores of 0.93 (T1), 0.89 (T2), 0.96 (T3), and 0.90 (T4), a critical evaluation aspect often omitted in prior literature. To our knowledge, this is the first study that embeds explicit clinical context into tumor stage classification. Unlike standard convolutional neural networks that operate in an uninterpretable "black box" manner, our method offers both state-of-the-art performance and transparent decision support.
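The abstract does not reproduce the exact staging rules, but the size criterion it builds on is standard. A minimal sketch of the rule-based step, assuming the AJCC 8th-edition size cutoffs (3, 5, and 7 cm) and deliberately omitting the invasion- and proximity-based criteria that the full pipeline also derives from the segmentation masks (the function name is illustrative, not the paper's):

```python
def t_stage_from_size(largest_dim_cm: float) -> str:
    """Map the largest tumor dimension (cm) to a T stage using the
    size criterion alone (AJCC 8th-edition cutoffs); invasion- and
    proximity-based upstaging is omitted in this sketch."""
    if largest_dim_cm <= 0:
        raise ValueError("tumor dimension must be positive")
    if largest_dim_cm <= 3.0:
        return "T1"
    if largest_dim_cm <= 5.0:
        return "T2"
    if largest_dim_cm <= 7.0:
        return "T3"
    return "T4"
```

In the full framework, distances from the tumor to structures such as the mediastinum or diaphragm, measured on the segmentation masks, could upstage a size-based T1/T2 result.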


Energy-Efficient Real-Time 4-Stage Sleep Classification at 10-Second Resolution: A Comprehensive Study

Mohammadi, Zahra, Fazel, Parnian, Mohammadi, Siamak

arXiv.org Artificial Intelligence

Sleep stage classification plays a crucial role in health monitoring, particularly for diagnosing and managing sleep disorders such as sleep apnea and insomnia. However, conventional clinical approaches like polysomnography are often costly, inconvenient, and impractical for long-term, home-based monitoring. In this study, we present an energy-efficient classification approach for detecting four sleep stages (wake, rapid eye movement (REM), light sleep, and deep sleep) using a single-lead electrocardiogram (ECG) signal. We evaluate and compare the performance of various machine-learning and deep-learning models. To support this, we introduce two novel windowing strategies: (1) a 5-minute window with 30-second steps for machine-learning models utilizing handcrafted features, and (2) a 30-second window with 10-second steps for deep-learning models, enabling near-real-time predictions with 10-second temporal resolution. Although our lightweight deep-learning models, such as MobileNet-v1, achieve high classification performance (up to 92% accuracy and 91% F1-score), their energy demands remain high, making them sub-optimal for wearable applications. To address this, we design SleepLiteCNN, optimized specifically for ECG-based sleep staging. To further enhance efficiency, we apply 8-bit quantization, which leaves classification performance unchanged while reducing the energy usage of SleepLiteCNN to just 5.48 µJ per inference at a 45 nm technology node, with 90% accuracy and 90% F1-score. We further demonstrate that deploying SleepLiteCNN on a field-programmable gate array (FPGA) significantly reduces resource usage through quantization. Overall, this approach provides a practical and efficient solution for continuous ECG-based sleep monitoring in compact, resource-constrained wearable devices. Sleep stage classification is crucial in health monitoring, especially for diagnosing and managing sleep disorders such as sleep apnea and insomnia [2].
Sleep stages are categorized into wake, REM, and Non-Rapid Eye Movement (NREM) stages.
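The two windowing strategies described in the abstract above can be sketched with a single sliding-window helper; the sampling rate (100 Hz) and the zero-filled placeholder signal are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, fs: int, win_s: float, step_s: float) -> np.ndarray:
    """Split a 1-D signal into overlapping windows of win_s seconds,
    advancing step_s seconds per window."""
    win, step = int(win_s * fs), int(step_s * fs)
    n = (len(signal) - win) // step + 1
    if n <= 0:
        return np.empty((0, win))
    return np.stack([signal[i * step : i * step + win] for i in range(n)])

fs = 100                      # assumed ECG sampling rate (Hz)
ecg = np.zeros(10 * 60 * fs)  # 10 minutes of placeholder signal

ml_windows = sliding_windows(ecg, fs, 5 * 60, 30)  # 5-min window, 30-s step
dl_windows = sliding_windows(ecg, fs, 30, 10)      # 30-s window, 10-s step
```

The second scheme is what enables predictions at 10-second resolution: each new 10-second step yields a fresh 30-second input for the deep model.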


MC2SleepNet: Multi-modal Cross-masking with Contrastive Learning for Sleep Stage Classification

Na, Younghoon, Ahn, Hyun Keun, Lee, Hyun-Kyung, Lee, Yoongeol, Oh, Seung Hun, Kim, Hongkwon, Lee, Jeong-Gun

arXiv.org Artificial Intelligence

Sleep profoundly affects our health, and sleep deficiency or disorders can cause physical and mental problems. Despite significant findings from previous studies, challenges persist in optimizing deep learning models, especially in multi-modal learning for high-accuracy sleep stage classification. Our research introduces MC2SleepNet (Multi-modal Cross-masking with Contrastive learning for Sleep stage classification Network). It aims to facilitate effective collaboration between Convolutional Neural Networks (CNNs) and Transformer architectures for multi-modal training with the help of contrastive learning and cross-masking. Raw single-channel EEG signals and corresponding spectrogram data provide differently characterized modalities for multi-modal learning. MC2SleepNet achieves state-of-the-art performance, with accuracies of 84.6% on SleepEDF-78 and 88.6% on the Sleep Heart Health Study (SHHS). These results demonstrate the effective generalization of our proposed network across both small and large datasets.
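Cross-masking details aside, the pairwise contrastive component between the two modalities can be sketched as a standard InfoNCE objective over paired raw-EEG and spectrogram embeddings; this is a generic sketch of the technique, not MC2SleepNet's exact loss:

```python
import numpy as np

def info_nce(z_eeg: np.ndarray, z_spec: np.ndarray, tau: float = 0.1) -> float:
    """Pairwise InfoNCE: each raw-EEG embedding (row) should match the
    spectrogram embedding of the same epoch against all other epochs
    in the batch."""
    z1 = z_eeg / np.linalg.norm(z_eeg, axis=1, keepdims=True)
    z2 = z_spec / np.linalg.norm(z_spec, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # diagonal = positive pairs
```

Perfectly aligned embeddings drive the loss toward zero, while unrelated embeddings leave it near log N, which is what makes the objective useful as a training signal.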


SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals

Thapa, Rahul, He, Bryan, Kjaer, Magnus Ruud, Moore, Hyatt, Ganjoo, Gauri, Mignot, Emmanuel, Zou, James

arXiv.org Artificial Intelligence

Sleep is a complex physiological process evaluated through various modalities recording electrical brain, cardiac, and respiratory activities. We curate a large polysomnography dataset from over 14,000 participants comprising over 100,000 hours of multi-modal sleep recordings. Leveraging this extensive dataset, we developed SleepFM, the first multi-modal foundation model for sleep analysis. We show that a novel leave-one-out approach for contrastive learning significantly improves downstream task performance compared to representations from standard pairwise contrastive learning. A logistic regression model trained on SleepFM's learned embeddings outperforms an end-to-end trained convolutional neural network (CNN) on sleep stage classification (macro AUROC 0.88 vs 0.72 and macro AUPRC 0.72 vs 0.48) and sleep disordered breathing detection (AUROC 0.85 vs 0.69 and AUPRC 0.77 vs 0.61). Notably, the learned embeddings achieve 48% top-1 average accuracy in retrieving the corresponding recording clips of other modalities from 90,000 candidates. This work demonstrates the value of holistic multi-modal sleep modeling to fully capture the richness of sleep recordings. SleepFM is open source and available at https://github.com/rthapa84/sleepfm-codebase.
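The leave-one-out contrastive scheme is not spelled out in the abstract; one plausible reading, sketched below, contrasts each modality's embedding against the mean of the remaining modalities' embeddings for the same clip (the tensor layout and plain averaging are assumptions of this sketch):

```python
import numpy as np

def leave_one_out_targets(embs: np.ndarray) -> np.ndarray:
    """For per-modality embeddings of shape (M, N, D) -- M modalities,
    N clips, D dimensions -- build each modality's leave-one-out target:
    the mean of the other M-1 modalities' embeddings for the same clip."""
    M = embs.shape[0]
    total = embs.sum(axis=0, keepdims=True)  # (1, N, D)
    return (total - embs) / (M - 1)          # (M, N, D)
```

Each modality would then be pulled toward its leave-one-out target with a standard contrastive loss, instead of forming one loss term per modality pair as in pairwise contrastive learning.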


NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG

Lee, Cheol-Hui, Kim, Hakseung, Han, Hyun-jee, Jung, Min-Kyung, Yoon, Byung C., Kim, Dong-Joo

arXiv.org Artificial Intelligence

The classification of sleep stages is a pivotal aspect of diagnosing sleep disorders and evaluating sleep quality. However, the conventional manual scoring process, conducted by clinicians, is time-consuming and prone to human bias. Recent advancements in deep learning have substantially propelled the automation of sleep stage classification. Nevertheless, challenges persist, including the need for large datasets with labels and the inherent biases in human-generated annotations. This paper introduces NeuroNet, a self-supervised learning (SSL) framework designed to effectively harness unlabeled single-channel sleep electroencephalogram (EEG) signals by integrating contrastive learning tasks and masked prediction tasks. NeuroNet demonstrates superior performance over existing SSL methodologies through extensive experimentation conducted across three polysomnography (PSG) datasets. Additionally, this study proposes a Mamba-based temporal context module to capture the relationships among diverse EEG epochs. Combining NeuroNet with the Mamba-based temporal context module has demonstrated the capability to achieve, or even surpass, the performance of the latest supervised learning methodologies, even with a limited amount of labeled data. This study is expected to establish a new benchmark in sleep stage classification, promising to guide future research and applications in the field of sleep analysis.
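The masked prediction pretext task relies on hiding part of the input and reconstructing it from context. A minimal sketch, under the assumption that masking zeroes out whole EEG epochs (NeuroNet's actual masking granularity and strategy may differ):

```python
import numpy as np

def mask_epochs(x: np.ndarray, mask_ratio: float, rng: np.random.Generator):
    """Zero out a random fraction of epochs (rows of x); the pretext
    task is to reconstruct the masked epochs from the visible ones."""
    n = x.shape[0]
    idx = rng.choice(n, size=int(round(mask_ratio * n)), replace=False)
    masked = x.copy()
    masked[idx] = 0.0
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return masked, mask
```

The returned boolean mask tells the training loop which epochs to score the reconstruction loss on, so the model is never rewarded for trivially copying visible inputs.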


Sleep Stage Classification Using a Pre-trained Deep Learning Model

Ardeshir, Hassan, Araghi, Mohammad

arXiv.org Artificial Intelligence

Sleep disorders are among the most common human health problems. The classification of sleep stages plays a fundamental role in diagnosing sleep disorders, monitoring treatment effectiveness, and understanding the relationship between sleep stages and various health conditions. A precise and efficient classification of these stages can significantly enhance our understanding of sleep-related phenomena and ultimately lead to improved health outcomes and disease treatment. Previously proposed models are often time-consuming and lack sufficient accuracy, especially in stage N1. The main objective of this research is to present a machine-learning model called "EEGMobile". This model utilizes pre-trained models and learns from electroencephalogram (EEG) spectrograms of brain signals. The model achieved an accuracy of 86.97% on a publicly available dataset named "Sleep-EDF20", outperforming models proposed by other researchers. Moreover, it recorded an accuracy of 56.4% in stage N1, better than other models. These findings demonstrate that this model has the potential to achieve better results in the diagnosis and treatment of sleep disorders.
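A model that learns from EEG spectrograms needs a raw-signal-to-image step first; a minimal sketch of converting one 30-second epoch into a log-power spectrogram, where the sampling rate and STFT parameters are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100                                               # assumed EEG sampling rate (Hz)
epoch = np.random.default_rng(0).normal(size=30 * fs)  # one 30-s placeholder epoch

# Convert the raw epoch into a time-frequency image that a
# pretrained vision backbone can consume.
f, t, Sxx = spectrogram(epoch, fs=fs, nperseg=128, noverlap=64)
log_spec = np.log(Sxx + 1e-10)  # log power compresses the dynamic range
```

Stacking such log-spectrograms (optionally replicated to three channels) is a common way to reuse ImageNet-pretrained backbones on EEG.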


Enhancing Healthcare with EOG: A Novel Approach to Sleep Stage Classification

Maiti, Suvadeep, Sharma, Shivam Kumar, Bapi, Raju S.

arXiv.org Artificial Intelligence

We introduce an innovative approach to automated sleep stage classification using EOG signals, addressing the discomfort and impracticality associated with EEG data acquisition. This approach remains largely untapped in the field, highlighting its potential for novel insights and contributions. Our proposed SE-Resnet-Transformer model provides accurate classification of five distinct sleep stages from the raw EOG signal. Extensive validation on publicly available databases (SleepEDF-20, SleepEDF-78, and SHHS) reveals noteworthy performance, with macro-F1 scores of 74.72, 70.63, and 69.26, respectively. Our model excels in identifying REM sleep, a crucial aspect of sleep disorder investigations. We also provide insight into the internal mechanisms of our model using techniques such as 1D-GradCAM and t-SNE plots. Our method improves the accessibility of sleep stage classification while decreasing the reliance on EEG modalities. This development has promising implications for healthcare and the incorporation of wearable technology into sleep studies, thereby advancing the field's potential for enhanced diagnostics and patient comfort.
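The SE-ResNet backbone builds on squeeze-and-excitation channel recalibration. A minimal numpy sketch of one SE gate on a (channels, time) feature map; the weight shapes and bottleneck size are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-Excitation on a (C, T) feature map: global-average-pool
    each channel, pass through a small bottleneck MLP, and rescale the
    channels by the resulting sigmoid gates."""
    s = x.mean(axis=-1)                   # squeeze: per-channel statistic, (C,)
    z = np.maximum(0.0, w1 @ s)           # excitation: ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # sigmoid gates, (C,)
    return x * g[:, None]                 # recalibrated feature map
```

The gates let the network emphasize channels whose learned features matter for the current input, at negligible extra cost compared to the convolutions themselves.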


Exploiting Spatial-temporal Data for Sleep Stage Classification via Hypergraph Learning

Liu, Yuze, Zhao, Ziming, Zhang, Tiehua, Wang, Kang, Chen, Xin, Huang, Xiaowei, Yin, Jun, Shen, Zhishu

arXiv.org Artificial Intelligence

Sleep stage classification is crucial for detecting patients' health conditions. Existing models, which mainly use Convolutional Neural Networks (CNNs) for modelling Euclidean data and Graph Neural Networks (GNNs) for modelling non-Euclidean data, are unable to simultaneously account for the heterogeneity and interactivity of multimodal data and the spatial-temporal correlation, which hinders further improvement of classification performance. In this paper, we propose a dynamic learning framework, STHL, which introduces hypergraphs to encode spatial-temporal data for sleep stage classification. Hypergraphs can model multi-modal/multi-type data rather than relying on simple pairwise connections between two subjects. STHL creates spatial and temporal hyperedges separately to build node correlations, then conducts a type-specific hypergraph learning process to encode the attributes into the embedding space. Extensive experiments show that our proposed STHL outperforms state-of-the-art models in sleep stage classification tasks.
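The hypergraph encoding step can be illustrated with the standard hypergraph convolution X' = Dv^{-1} H W De^{-1} H^T X, where H is the node-by-hyperedge incidence matrix; STHL's type-specific variant adds learned projections and dynamic hyperedges that this generic sketch omits:

```python
import numpy as np

def hypergraph_conv(X: np.ndarray, H: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One hypergraph convolution step, X' = Dv^{-1} H W De^{-1} H^T X:
    node features are averaged within each hyperedge, then each node
    aggregates the (weighted) messages of the hyperedges it belongs to."""
    Dv = H @ w                                   # weighted node degrees, (n,)
    De = H.sum(axis=0)                           # hyperedge degrees, (e,)
    edge_means = (H.T @ X) / De[:, None]         # per-hyperedge mean feature
    return ((H * w) @ edge_means) / Dv[:, None]  # back to nodes, normalized
```

Because a hyperedge can contain any number of nodes, a single spatial or temporal hyperedge can relate many channels or time steps at once, which is exactly what pairwise graph edges cannot express.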


EEG-based Sleep Staging with Hybrid Attention

Zhou, Xinliang, Liu, Chenyu, Xiao, Jiaping, Liu, Yang

arXiv.org Artificial Intelligence

Sleep staging is critical for assessing sleep quality and diagnosing sleep disorders. However, capturing both the spatial and temporal relationships within electroencephalogram (EEG) signals during different sleep stages remains challenging. In this paper, we propose a novel framework called the Hybrid Attention EEG Sleep Staging (HASS) framework. Specifically, we propose a well-designed spatio-temporal attention mechanism that adaptively assigns weights to inter-channel and intra-channel EEG segments based on the spatio-temporal relationships of the brain during different sleep stages. Experimental results on the MASS and ISRUC datasets demonstrate that HASS can significantly improve typical sleep staging networks. Our proposed framework alleviates the difficulty of capturing the spatial-temporal relationships of EEG signals during sleep staging and holds promise for improving the accuracy and reliability of sleep assessment in both clinical and research settings.
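The building block behind such adaptive weighting is scaled dot-product attention; a generic numpy sketch (HASS's spatial and temporal variants add structure this sketch does not model):

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    """Scaled dot-product attention: each query (e.g. an EEG segment)
    is re-expressed as a similarity-weighted mix of the values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax rows sum to 1
    return w @ V, w
```

Applying this across channels gives inter-channel (spatial) weights, and applying it across segments of one channel gives intra-channel (temporal) weights.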


Sleep Model -- A Sequence Model for Predicting the Next Sleep Stage

Choi, Iksoo, Sung, Wonyong

arXiv.org Artificial Intelligence

As sleep disorders are becoming more prevalent, there is an urgent need to classify sleep stages in a less disturbing way. In particular, sleep-stage classification using simple sensors, such as single-channel electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), or electrocardiography (ECG), has gained substantial interest. In this study, we proposed a sleep model that predicts the next sleep stage and used it to improve sleep classification accuracy. The sleep models were built using sleep-sequence data and employed either statistical $n$-gram or deep neural network-based models. We developed beam-search decoding to combine the information from the sensor and the sleep models. Furthermore, we evaluated the performance of the $n$-gram and long short-term memory (LSTM) recurrent neural network (RNN)-based sleep models and demonstrated the improvement of sleep-stage classification using an EOG sensor. The developed sleep models significantly improved the accuracy of sleep-stage classification, particularly in the absence of an EEG sensor.
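The idea of fusing per-epoch sensor posteriors with a statistical sleep model can be sketched with a bigram ($n$=2) model and width-1 beam (i.e. greedy) decoding; the stage labels, add-one smoothing, and interpolation weight `lam` are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from collections import defaultdict

STAGES = ["W", "REM", "Light", "Deep"]

def bigram_model(sequences):
    """Estimate P(next stage | current stage) from training sleep
    sequences, with add-one smoothing so unseen transitions stay > 0."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a in STAGES:
        total = sum(counts[a].values()) + len(STAGES)
        probs[a] = {b: (counts[a][b] + 1) / total for b in STAGES}
    return probs

def decode(sensor_probs, lm, lam=0.5):
    """Width-1 beam decoding: fuse per-epoch sensor posteriors with
    the bigram sleep model in the log domain."""
    path = [max(STAGES, key=lambda s: sensor_probs[0][s])]
    for t in range(1, len(sensor_probs)):
        prev = path[-1]
        path.append(max(STAGES, key=lambda s:
            np.log(sensor_probs[t][s]) + lam * np.log(lm[prev][s])))
    return path
```

With a wider beam, several candidate stage histories are kept per epoch instead of one, which is what the paper's beam-search decoding provides over this greedy sketch.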